Learning Visual Features to Recommend Grasp Configurations
Author
Abstract
This paper is a preliminary account of current work on a visual system that learns to aid in robotic grasping and manipulation tasks. Localized features of the visual scene are learned that correlate reliably with the orientation of a dextrous robotic hand during haptically guided grasps. On the basis of these features, hand configurations are recommended for future grasping operations. The learning process is instance-based, on-line and incremental, and the interaction between visual and haptic systems is loosely anthropomorphic. It is conjectured that critical spatial information can be learned on the basis of features of visual appearance, without explicit geometric representations or planning.
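The instance-based, on-line learning described above can be pictured as a memory of (visual feature, hand orientation) pairs that is queried by similarity at grasp time. The following Python sketch is illustrative only: the fixed-length feature descriptors, the Euler-angle orientation encoding, the nearest-neighbour averaging, and all names are assumptions made for the example, not details of the system reported here.

# Minimal sketch of an instance-based, on-line recommender that associates
# visual feature descriptors with the hand orientation used in a successful,
# haptically guided grasp. Descriptor format and orientation encoding are
# illustrative assumptions, not the paper's actual implementation.
import numpy as np


class GraspMemory:
    """Stores (feature, orientation) instances and recommends by nearest neighbour."""

    def __init__(self):
        self.features = []      # localized visual feature descriptors
        self.orientations = []  # hand orientations recorded during guided grasps

    def add_instance(self, feature, orientation):
        """Incremental update: each observed grasp simply appends one instance."""
        self.features.append(np.asarray(feature, dtype=float))
        self.orientations.append(np.asarray(orientation, dtype=float))

    def recommend(self, feature, k=3):
        """Recommend a hand orientation for a new scene by averaging the
        orientations of the k most similar stored features."""
        if not self.features:
            return None
        query = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(query - f) for f in self.features]
        nearest = np.argsort(dists)[:k]
        return np.mean([self.orientations[i] for i in nearest], axis=0)


# Usage: learn from two guided grasps, then query a similar-looking scene.
memory = GraspMemory()
memory.add_instance([0.9, 0.1, 0.4], [0.0, 45.0, 10.0])   # feature -> (roll, pitch, yaw)
memory.add_instance([0.8, 0.2, 0.5], [5.0, 40.0, 12.0])
print(memory.recommend([0.85, 0.15, 0.45], k=2))

Because each guided grasp only appends one instance, such a memory grows and improves on-line without retraining, which matches the incremental character described in the abstract.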
Similar Resources
Using Visual Features to Predict Successful Grasp Parameters
Visual features play an important part in hand pre-shaping during human grasping. This paper focuses on using visual features in an image to predict successful grasp types, which can be used in robot grasp manipulation. The following questions are discussed: First, how to recognize different shapes in a given image. Second, how to train the system using image-grasp pairs. Third, evaluate th...
Learning visually guided grasping: a test case in sensorimotor learning
We present a general scheme for learning sensorimotor tasks which allows rapid on-line learning and generalization of the learned knowledge to unfamiliar objects. The scheme consists of two modules, the first generating candidate actions and the second estimating their quality. Both modules work in an alternating fashion until an action which is expected to provide satisfactory performance is g...
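The two-module scheme summarized above can be read as a generate-and-evaluate loop: one module proposes candidate actions, the other estimates their quality, and the loop stops as soon as a candidate is expected to perform satisfactorily. In the sketch below the random proposal, the toy quality model, and all names are assumptions made for illustration; in the referenced work both modules are learned components rather than hand-written stand-ins.

# Minimal sketch of an alternating generate-and-evaluate scheme:
# module 1 proposes candidate grasp actions, module 2 scores them,
# and the loop ends once a candidate's estimated quality is satisfactory.
import random


def propose_candidate(object_features):
    """Module 1: generate one candidate action (here, a random approach angle)."""
    return {"approach_angle": random.uniform(0.0, 180.0)}


def estimate_quality(action, object_features):
    """Module 2: estimate expected grasp quality for the candidate.
    In this toy model, quality is highest when the approach angle matches
    a preferred angle encoded in the object features."""
    preferred = object_features["preferred_angle"]
    return 1.0 - abs(action["approach_angle"] - preferred) / 180.0


def select_action(object_features, threshold=0.9, max_iterations=1000):
    """Alternate between the two modules until a candidate's estimated
    quality exceeds the threshold, or give up after max_iterations."""
    best = None
    for _ in range(max_iterations):
        candidate = propose_candidate(object_features)
        quality = estimate_quality(candidate, object_features)
        if best is None or quality > best[1]:
            best = (candidate, quality)
        if quality >= threshold:
            break
    return best


print(select_action({"preferred_angle": 70.0}))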
High-level Reasoning and Low-level Learning for Grasping: A Probabilistic Logic Pipeline
While grasps must satisfy grasp stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. In this paper, we consider such information for robot grasping by leveraging manifolds and symbolic object parts. Specifically, we introduce a new probabilistic logic module to first semant...
Learning Visual Representations for Interactive Systems
We describe two quite different methods for associating action parameters to visual percepts. Our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results ...
How can I, robot, pick up that object with my hand?
This paper describes a practical approach to the robot grasping problem, composed of two different parts. First, a vision-based grasp synthesis system implemented on a humanoid robot, able to compute a set of feasible grasps and to execute any of them. This grasping system takes gripper kinematics constraints into account and uses little computational effort. Second, a learni...
Journal:
Volume, Issue:
Pages: -
Publication date: 2000